10 research outputs found
Literature-Augmented Clinical Outcome Prediction
We present BEEP (Biomedical Evidence-Enhanced Predictions), a novel approach
for clinical outcome prediction that retrieves patient-specific medical
literature and incorporates it into predictive models. Based on each individual
patient's clinical notes, we train language models (LMs) to find relevant
papers and fuse them with information from notes to predict outcomes such as
in-hospital mortality. We develop methods to retrieve literature based on
noisy, information-dense patient notes, and to augment existing outcome
prediction models with retrieved papers in a manner that maximizes predictive
accuracy. Our approach boosts predictive performance on three important
clinical tasks in comparison to strong recent LM baselines, increasing F1 by up
to 5 points and precision@Top-K by a large margin of over 25%.
Comment: To appear in Findings of NAACL 2022. Code available at:
https://github.com/allenai/BEE
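The retrieve-then-fuse idea can be made concrete with a minimal sketch: embed a patient note, rank paper abstracts by cosine similarity, and return the top-K for fusion with the note representation. The bag-of-words embedding, function names, and example texts below are illustrative stand-ins, not BEEP's actual trained biomedical LMs or retrieval pipeline.

```python
# Hypothetical sketch of literature retrieval from a patient note,
# in the spirit of BEEP. A toy bag-of-words embedding stands in for
# the paper's trained language models.
from collections import Counter
import math

def embed(text):
    # Toy term-frequency vector; BEEP uses trained biomedical LMs.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(note, corpus, k=2):
    # Rank papers by similarity to the noisy, information-dense note;
    # the top-k would then be fused with the note for outcome prediction.
    q = embed(note)
    return sorted(corpus, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

note = "elderly patient with sepsis and acute kidney injury"
papers = [
    "outcomes of sepsis in elderly patients",
    "acute kidney injury management in the icu",
    "dietary interventions for type 2 diabetes",
]
top = retrieve(note, papers, k=2)
```

In this toy run, the two clinically relevant abstracts are retrieved and the off-topic one is filtered out; a real system would retrieve from a large biomedical corpus.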
SynerGPT: In-Context Learning for Personalized Drug Synergy Prediction and Drug Design
Predicting synergistic drug combinations can help accelerate discovery of
cancer treatments, particularly therapies personalized to a patient's specific
tumor via biopsied cells. In this paper, we propose a novel setting and models
for in-context drug synergy learning. We are given a small "personalized
dataset" of 10-20 drug synergy relationships in the context of specific cancer
cell targets. Our goal is to predict additional drug synergy relationships in
that context. Inspired by recent work that pre-trains a GPT language model (LM)
to "in-context learn" common function classes, we devise novel pre-training
schemes that enable a GPT model to in-context learn "drug synergy functions".
Our model -- which does not use any textual corpora, molecular fingerprints,
protein interaction or any other domain-specific knowledge -- is able to
achieve competitive results. We further integrate our in-context approach with
a genetic algorithm to optimize model prompts and select synergy candidates to
test after conducting a patient biopsy. Finally, we explore a novel task of
inverse drug design which can potentially enable the design of drugs that
synergize specifically to target a given patient's "personalized dataset". Our
findings can potentially have an important impact on precision cancer medicine,
and also raise intriguing questions on non-textual pre-training for LMs.
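To make the in-context setting concrete: a small "personalized dataset" of drug-pair synergy labels serves as the context, and a query pair is scored from that context alone. SynerGPT trains a GPT model to do this; the shared-drug averaging below is only a simple stand-in to illustrate the setting, and all drug names and scores are hypothetical.

```python
# Illustrative stand-in for in-context drug synergy prediction:
# score a query pair using only a small personalized context of
# (drug_a, drug_b, synergy) triples. Not the paper's actual model.
def predict_synergy(context, query):
    a, b = query
    # Average the synergy of context pairs sharing a drug with the query.
    shared = [s for (x, y, s) in context if {x, y} & {a, b}]
    return sum(shared) / len(shared) if shared else 0.0

personalized = [
    ("drugA", "drugB", 0.9),  # hypothetical synergy labels for one
    ("drugA", "drugC", 0.7),  # patient's biopsied cell line
    ("drugD", "drugE", 0.1),
]
score = predict_synergy(personalized, ("drugA", "drugE"))
```

The key property shared with the paper's setting is that prediction depends entirely on the 10-20 in-context examples, with no external corpora or molecular features.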
Relatedly: Scaffolding Literature Reviews with Existing Related Work Sections
Scholars who want to research a scientific topic must take time to read,
extract meaning, and identify connections across many papers. As scientific
literature grows, this becomes increasingly challenging. Meanwhile, authors
summarize prior research in papers' related work sections, though this is
scoped to support a single paper. A formative study found that while reading
multiple related work paragraphs helps readers gain an overview of a topic, it
is hard to navigate overlapping and diverging references and research foci. In
this work, we design
a system, Relatedly, that scaffolds exploring and reading multiple related work
paragraphs on a topic, with features including dynamic re-ranking and
highlighting to spotlight unexplored dissimilar information, auto-generated
descriptive paragraph headings, and low-lighting of redundant information. From
a within-subjects user study (n=15), we found that scholars generate more
coherent, insightful, and comprehensive topic outlines using Relatedly compared
to a baseline paper list.
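Dynamic re-ranking of the kind described above can be sketched as follows: once a reader has explored some paragraphs, the remaining ones are re-ordered so the least redundant (most dissimilar) content surfaces first. Jaccard similarity over word sets is an illustrative stand-in here, not Relatedly's actual scoring.

```python
# Hedged sketch of re-ranking unread paragraphs to spotlight
# unexplored, dissimilar information, in the spirit of Relatedly.
def jaccard(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def rerank(paragraphs, explored):
    # Score each unread paragraph by its maximum similarity to any
    # already-explored paragraph; least redundant comes first.
    def redundancy(p):
        return max((jaccard(p, e) for e in explored), default=0.0)
    return sorted(paragraphs, key=redundancy)

explored = ["transformer attention for long documents"]
unread = [
    "attention mechanisms in long documents",
    "user studies of reading interfaces",
]
order = rerank(unread, explored)
```

Here the paragraph on user studies, which overlaps least with what was already read, is promoted ahead of the redundant attention paragraph.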
The Semantic Reader Project: Augmenting Scholarly Documents through AI-Powered Interactive Reading Interfaces
Scholarly publications are key to the transfer of knowledge from scholars to
others. However, research papers are information-dense, and as the volume of
the scientific literature grows, the need for new technology to support the
reading process grows. In contrast to the process of finding papers, which has
been transformed by Internet technology, the experience of reading research
papers has changed little in decades. The PDF format for sharing research
papers is widely used due to its portability, but it has significant downsides
including static content, poor accessibility for low-vision readers, and
difficulty reading on mobile devices. This paper explores the question "Can
recent advances in AI and HCI power intelligent, interactive, and accessible
reading interfaces -- even for legacy PDFs?" We describe the Semantic Reader
Project, a collaborative effort across multiple institutions to explore
automatic creation of dynamic reading interfaces for research papers. Through
this project, we've developed ten research prototype interfaces and conducted
usability studies with more than 300 participants and real-world users showing
improved reading experiences for scholars. We've also released a production
reading interface for research papers that will incorporate the best features
as they mature. We structure this paper around challenges scholars and the
public face when reading research papers -- Discovery, Efficiency,
Comprehension, Synthesis, and Accessibility -- and present an overview of our
progress and remaining open challenges.